pragmatic reasoning
HandMeThat: Human-Robot Communication in Physical and Social Environments
While previous benchmarks in similar domains have primarily focused on the language grounding of object properties (e.g., "table"), relations (e.g., "on"), and planning (e.g., object search and manipulation) [6,7], in this paper we highlight the additional challenge of understanding ambiguous human instructions (i.e., recognizing the subgoal) based on physical states and on human actions and goals. Each episode in HandMeThat contains two stages.
MUStReason: A Benchmark for Diagnosing Pragmatic Reasoning in Video-LMs for Multimodal Sarcasm Detection
Saha, Anisha, Suresh, Varsha, Hospedales, Timothy, Demberg, Vera
Sarcasm is a specific type of irony which involves discerning what is said from what is meant. Detecting sarcasm depends not only on the literal content of an utterance but also on non-verbal cues such as the speaker's tonality, facial expressions, and conversational context. However, current multimodal models struggle with complex tasks like sarcasm detection, which require identifying relevant cues across modalities and pragmatically reasoning over them to infer the speaker's intention. To explore these limitations in VideoLMs, we introduce MUStReason, a diagnostic benchmark enriched with annotations of modality-specific relevant cues and underlying reasoning steps to identify sarcastic intent. In addition to benchmarking sarcasm classification performance in VideoLMs, we use MUStReason to quantitatively and qualitatively evaluate the generated reasoning by disentangling the problem into perception and reasoning. We also propose PragCoT, a framework that steers VideoLMs to focus on implied intentions over literal meaning, a property core to detecting sarcasm.
0c0a7566915f4f24853fc4192689aa7e-Reviews.html
First provide a summary of the paper, and then address the following criteria: quality, clarity, originality, and significance. This paper presents a probabilistic model for language learning. The authors cover the manner in which a pair of cooperative agents may work together to create an agreed-upon language. One question I have is how this could be implemented in real-world language learning situations. Your evaluation of the emergence of phenomena seen in real-world languages makes me think you are trying to model, or learn something about, what real-world language evolution is like.
Rational Retrieval Acts: Leveraging Pragmatic Reasoning to Improve Sparse Retrieval
Satouf, Arthur, Zenou, Gabriel Ben, Piwowarski, Benjamin, Boubacar, Habiboulaye Amadou, Piantanida, Pablo
Current sparse neural information retrieval (IR) methods, and to a lesser extent more traditional models such as BM25, do not take into account the document collection and the complex interplay between different term weights when representing a single document. In this paper, we show how the Rational Speech Acts (RSA), a linguistics framework used to minimize the number of features to be communicated when identifying an object in a set, can be adapted to the IR case -- and in particular to the high number of potential features (here, tokens). RSA dynamically modulates token-document interactions by considering the influence of other documents in the dataset, better contrasting document representations. Experiments show that incorporating RSA consistently improves multiple sparse retrieval models and achieves state-of-the-art performance on out-of-domain datasets from the BEIR benchmark. https://github.com/arthur-75/Rational-Retrieval-Acts
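The abstract describes adapting Rational Speech Acts (RSA) to retrieval: iterating between a "listener" normalization over the collection and a "speaker" normalization over each document, so that tokens shared by many documents are down-weighted relative to distinctive ones. The sketch below illustrates that idea on a toy term-document weight matrix; the function name, normalizations, and `alpha` temperature are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def rsa_reweight(weights: np.ndarray, alpha: float = 1.0, steps: int = 1) -> np.ndarray:
    """Illustrative RSA-style reweighting of a term-document matrix.

    Rows are documents, columns are tokens. This is a hypothetical
    sketch of the alternating normalization idea, not the paper's method.
    """
    s = weights.astype(float)
    for _ in range(steps):
        # "Literal listener": normalize each token column over documents,
        # so a token's weight reflects how well it identifies a document
        # within the whole collection.
        listener = s / (s.sum(axis=0, keepdims=True) + 1e-12)
        # "Pragmatic speaker": renormalize each document row over tokens
        # (sharpened by alpha), preferring tokens that best single out
        # this document against the rest of the collection.
        speaker = listener ** alpha
        s = speaker / (speaker.sum(axis=1, keepdims=True) + 1e-12)
    return s

# Toy collection: the third token appears in every document, so the
# RSA iteration shifts weight toward each document's distinctive token.
w = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
print(rsa_reweight(w))
```

On this toy input, the shared token's weight drops below the distinctive token's in each row, which is the contrastive effect the abstract attributes to RSA.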
Diplomat: A Dialogue Dataset for Situated PragMATic Reasoning
The ability to discern and comprehend pragmatic meanings is a cornerstone of social and emotional intelligence, referred to as pragmatic reasoning. Despite the strides made in the development of Large Language Models (LLMs), such as ChatGPT, these models grapple with capturing the nuanced and ambiguous facets of language, falling short of the aspiration to build human-like conversational agents. In this work, we introduce a novel benchmark, DiPlomat, which delves into the fundamental components of conversational pragmatic reasoning, encompassing situational context reasoning, open-world knowledge acquisition, and unified figurative language understanding. We start by collecting a new human-annotated dialogue dataset, composed of 4,177 multi-turn dialogues and a vocabulary of 48,900 words. Along with the dataset, two tasks are proposed to evaluate machines' pragmatic reasoning capabilities, namely Pragmatic Reasoning and Identification (PIR) and Conversational Question Answering (CQA).